139 research outputs found

    SWIG users manual (version 1.1)

    Get PDF
    Technical report
    SWIG is a tool for solving problems. More specifically, SWIG is a simple tool for building interactive C, C++, or Objective-C programs with common scripting languages such as Tcl, Perl, and Python. Of course, more importantly, SWIG is a tool for making C programming more enjoyable and promoting laziness (an essential feature). SWIG is not part of an overgrown software engineering project, an attempt to build some sort of monolithic programming environment, or an attempt to force everyone to rewrite all of their code (i.e. code reuse). In fact, none of these things have ever been a priority. SWIG was originally developed in the Theoretical Physics Division at Los Alamos National Laboratory for building interfaces to large materials science research simulations being run on the Connection Machine 5 supercomputer.
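
    As a rough illustration of the workflow the manual describes, the sketch below shows how a SWIG-generated wrapper might be used from Python. The module name, the C function, and the build steps are hypothetical examples, not taken from the manual.

        # Hypothetical build steps (run outside Python), assuming a C file example.c
        # and a SWIG interface file example.i exposing int fact(int n):
        #   swig -python example.i
        #   (compile example.c and the generated example_wrap.c into _example.so)
        # The wrapped code can then be imported like any other Python module.
        import example

        for n in range(1, 6):
            # Each call dispatches directly into the compiled C implementation.
            print(n, example.fact(n))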

    A wrapper generation tool for the creation of scriptable scientific applications

    Get PDF
    Journal article
    In recent years, there has been considerable interest in the use of scripting languages as a mechanism for controlling and developing scientific software. Scripting languages allow scientific applications to be encapsulated in an interpreted environment similar to that found in commercial scientific packages such as MATLAB, Mathematica, and IDL. This improves the usability of scientific software by providing a powerful mechanism for specifying and controlling complex problems as well as giving users an interactive and exploratory problem solving environment. Scripting languages also provide a framework for building and integrating software components that allows tools to be used in a more efficient manner. This streamlines the problem solving process and enables scientists to be more productive.

    Training Big Random Forests with Little Resources

    Full text link
    Without access to large compute clusters, building random forests on large datasets is still a challenging problem. This is, in particular, the case if fully-grown trees are desired. We propose a simple yet effective framework that makes it possible to efficiently construct ensembles of huge trees for hundreds of millions or even billions of training instances using a cheap desktop computer with commodity hardware. The basic idea is to consider a multi-level construction scheme, which builds top trees for small random subsets of the available data and which subsequently distributes all training instances to the top trees' leaves for further processing. While being conceptually simple, the overall efficiency crucially depends on the particular implementation of the different phases. The practical merits of our approach are demonstrated using dense datasets with hundreds of millions of training instances.
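
    The multi-level scheme summarized above can be pictured with a minimal two-level toy version in scikit-learn. This is a sketch of the idea only, not the authors' implementation; all dataset sizes and tree parameters are arbitrary choices for illustration.

        # Toy two-level construction: a shallow "top tree" fit on a small random
        # subset, all instances routed to its leaves, and a fully-grown tree
        # built per leaf bucket.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)

        # Phase 1: depth-limited top tree on a small random subset.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=5_000, replace=False)
        top = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[idx], y[idx])

        # Phase 2: distribute every training instance to a top-tree leaf.
        leaf_ids = top.apply(X)

        # Phase 3: grow an unrestricted bottom tree per leaf bucket.
        bottom = {}
        for leaf in np.unique(leaf_ids):
            mask = leaf_ids == leaf
            bottom[leaf] = DecisionTreeClassifier(random_state=0).fit(X[mask], y[mask])

        def predict(x_row):
            # Route a sample through the top tree, then through its leaf's bottom tree.
            leaf = top.apply(x_row.reshape(1, -1))[0]
            return bottom[leaf].predict(x_row.reshape(1, -1))[0]

        print(predict(X[0]), y[0])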

    Python Programmers Have GPUs Too: Automatic Python Loop Parallelization with Staged Dependence Analysis

    Get PDF
    Python is a popular language for end-user software development in many application domains. End-users want to harness parallel compute resources effectively, by exploiting commodity manycore technology including GPUs. However, existing approaches to parallelism in Python are esoteric, and generally seem too complex for the typical end-user developer. We argue that implicit, or automatic, parallelization is the best way to deliver the benefits of manycore to end-users, since it avoids domain-specific languages, specialist libraries, complex annotations or restrictive language subsets. Auto-parallelization fits the Python philosophy, provides effective performance, and is convenient for non-expert developers. Despite being a dynamic language, we show that Python is a suitable target for auto-parallelization. In an empirical study of 3000+ open-source Python notebooks, we demonstrate that typical loop behaviour ‘in the wild’ is amenable to auto-parallelization. We show that staging the dependence analysis is an effective way to maximize performance. We apply classical dependence analysis techniques, then leverage the Python runtime’s rich introspection capabilities to resolve additional loop bounds and variable types in a just-in-time manner. The parallel loop nest code is then converted to CUDA kernels for GPU execution. We achieve orders of magnitude speedup over baseline interpreted execution and some speedup (up to 50x, although not consistently) over CPU JIT-compiled execution, across 12 loop-intensive standard benchmarks.
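
    To make the loop-to-kernel transformation concrete, the sketch below shows a sequential Python loop and a hand-written GPU equivalent using Numba's CUDA support. Numba is used here purely as a stand-in for the kind of CUDA kernel an auto-parallelizer might emit; it is not the system described in the paper, and running the sketch requires a CUDA-capable GPU.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def scale_add(a, b, out):
            # One CUDA thread per iteration of the original sequential loop:
            #   for i in range(n): out[i] = 2.0 * a[i] + b[i]
            i = cuda.grid(1)
            if i < out.size:
                out[i] = 2.0 * a[i] + b[i]

        n = 1_000_000
        a = np.random.rand(n)
        b = np.random.rand(n)

        # Explicit host-to-device transfers, then a 1D launch configuration.
        d_a, d_b = cuda.to_device(a), cuda.to_device(b)
        d_out = cuda.device_array(n)
        threads = 256
        blocks = (n + threads - 1) // threads
        scale_add[blocks, threads](d_a, d_b, d_out)

        print(d_out.copy_to_host()[:3])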

    Investigating Whether A Social Media Presence Impacts Claim Severity

    Get PDF
    This project, sponsored by The Hanover Insurance Group, is a preliminary investigation into whether social media information can be used as a statistically significant factor in modeling workers’ compensation claim severity in the restaurant and food services industry. Factors including the existence of a website, Yelp page, or TripAdvisor page, were considered in this investigation. Through analysis of these social media variables, we created a generalized linear model of claim severity. While the specific data that we gathered and analyzed did not hold any predictive power, we still believe that social media is a worthwhile field to pursue, but in a different capacity.
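
    A minimal sketch of the kind of severity GLM described above is shown below, using statsmodels. The Gamma family, log link, and variable names are assumptions made for illustration, and the data are synthetic placeholders rather than the project's claim data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 500
        df = pd.DataFrame({
            "has_website": rng.integers(0, 2, n),
            "has_yelp": rng.integers(0, 2, n),
            "has_tripadvisor": rng.integers(0, 2, n),
        })
        # Synthetic positive claim severities standing in for real claim amounts.
        df["severity"] = rng.gamma(shape=2.0, scale=5_000.0, size=n)

        # GLM of severity on the binary social-media indicators.
        X = sm.add_constant(df[["has_website", "has_yelp", "has_tripadvisor"]])
        model = sm.GLM(df["severity"], X,
                       family=sm.families.Gamma(link=sm.families.links.Log()))
        result = model.fit()
        print(result.summary())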

    Climate change winner in the deep sea? Predicting the impacts of climate change on the distribution of the glass sponge Vazella pourtalesii

    Get PDF
    Shallow-water sponges are often cited as being ‘climate change winners’ due to their resiliency against climate change effects compared to other benthic taxa. However, little is known of the impacts of climate change on deep-water sponges. The deep-water glass sponge Vazella pourtalesii is distributed off eastern North America, forming dense sponge grounds with enhanced biodiversity on the Scotian Shelf off Nova Scotia, Canada. While the strong natural environmental variability that characterizes these sponge grounds suggests this species is resilient to a changing environment, its physiological limitations remain unknown, and the impact of more persistent anthropogenic climate change on its distribution has never been assessed. We used Random Forest and generalized additive models to project the distribution of V. pourtalesii in the northwest Atlantic using environmental conditions simulated under moderate and worst-case CO2 emission scenarios. Under future (2046-2065) climate change, the suitable habitat of V. pourtalesii will increase up to 4 times its present-day size and shift into deeper waters and higher latitudes, particularly in its northern range where ocean warming will serve to improve the habitat surrounding this originally sub-tropical species. However, not all areas projected as suitable habitat in the future will realistically be populated, and the reduced likelihood of occurrence in its core habitat on the Scotian Shelf suggests that the existing Vazella sponge grounds may be negatively impacted. An effective monitoring programme will require tracking changes in the density and distribution of V. pourtalesii at the margins between core habitat and where losses and gains were projected.
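
    The Random Forest half of the species-distribution workflow described above can be sketched as follows, using scikit-learn as a stand-in for the study's modelling tools. The predictors, the presence/absence records, and the simple temperature shift representing a future scenario are all synthetic placeholders, not the study's data.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        n = 2_000
        # Placeholder environmental predictors for present-day conditions.
        present_day = pd.DataFrame({
            "depth_m": rng.uniform(50, 500, n),
            "bottom_temp_c": rng.uniform(2, 12, n),
            "salinity_psu": rng.uniform(31, 36, n),
        })
        presence = rng.integers(0, 2, n)  # placeholder presence/absence records

        rf = RandomForestClassifier(n_estimators=500, random_state=1)
        rf.fit(present_day, presence)

        # Projected 2046-2065 conditions under an emission scenario would be
        # substituted here; this frame simply warms the bottom temperature as a stand-in.
        future = present_day.assign(bottom_temp_c=present_day["bottom_temp_c"] + 2.0)
        suitability = rf.predict_proba(future)[:, 1]  # probability of suitable habitat
        print(suitability[:5])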

    National guidelines for digital modelling : case studies

    Get PDF
    These National Guidelines and Case Studies for Digital Modelling are the outcomes from one of a number of Building Information Modelling (BIM)-related projects undertaken by the CRC for Construction Innovation. Since the CRC opened its doors in 2001, the industry has seen a rapid increase in interest in BIM, and widening adoption. These guidelines and case studies are thus very timely, as the industry moves to model-based working and starts to share models in a new context called integrated practice. Governments, both federal and state, and in New Zealand, are starting to outline the role they might take, so that, in contrast to the adoption of 2D CAD in the early 90s, we ensure that a national, industry-wide benefit results from this new paradigm of working. Section 1 of the guidelines gives an overview of BIM: how it affects our current mode of working, and what we need to do to move to fully collaborative model-based facility development. The role of open standards such as IFC is described as a mechanism to support new processes and to make the extensive design and construction information available to asset operators and managers. Digital collaboration modes, types of models, levels of detail, object properties and model management complete this section. It will be relevant for owners, managers and project leaders as well as direct users of BIM. Section 2 provides recommendations and guides for key areas of model creation and development, and the move to simulation and performance measurement. These are the more practical parts of the guidelines, developed for design professionals, BIM managers, technical staff and ‘in the field’ workers. The guidelines are supported by six case studies, including a summary of lessons learnt about implementing BIM in Australian building projects. A key aspect of these publications is the identification of a number of important industry actions: the need for BIM-compatible product information and a national context for classifying product data; the need for an industry agreement and standard-setting process for process definition; and finally, the need to ensure a national standard for sharing data between all of the participants in the facility-development process.